Every enterprise is becoming an agent operator, whether it planned to or not. Agents have access to the most critical systems, but there is no guarantee they will not make serious mistakes or be compromised.
Autonomous agents take the "autonomous" part of their name very seriously and don't necessarily do what their humans tell them to do - or not to do. But the situation is more complicated than that. Generative AI (genAI) and agentic systems operate quite differently from other systems - including older AI systems - and from humans. That means that how tech users and decision-makers phrase instructions, and where those instructions are placed, can make a major difference in outcomes.
During the AI Impact Summit in India, the UK government announced that £27m is now available for AI alignment research, backing some 60 projects. The programme combines grant funding for research, access to compute infrastructure and ongoing academic mentorship from AISI's own leading scientists in the field to drive progress in alignment research. Without continued progress in this area, increasingly powerful AI models could act in ways that are difficult to anticipate or control, posing challenges for global safety and governance.
"It's gonna be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed, probably unfolding in a matter of a decade rather than a century."
For individuals, that means people using AI services want to be able to veto big decisions such as making payments, accessing or using contact details, changing account details, placing orders, or even just to seek clarity during a decision-making process. Extend this way of thinking to the workplace, and resistance is likely to be equally strong in professional settings.